The Future


Epstein's shadow: Why Bill Gates pulled out of Modi's AI summit

Al Jazeera

Microsoft founder Bill Gates has cancelled his keynote speech at India's flagship AI summit just hours before he was due to take the stage on Thursday. Gates, who has faced renewed scrutiny over his past ties to the late sex offender Jeffrey Epstein, withdrew to "ensure the focus remains on the AI Summit's key priorities", the Gates Foundation said in a statement. India's Prime Minister Narendra Modi had billed the summit as an opportunity for India to shape the future of AI, drawing high-profile attendees, including French President Emmanuel Macron and Brazilian President Luiz Inacio Lula da Silva. Instead, it has been dogged by controversy, from Gates's abrupt exit to an incident in which an Indian university tried to pass off a Chinese-made robotic dog as its own innovation. So, what exactly went wrong at India's flagship AI gathering and why has it drawn such intense scrutiny?


World leaders discuss AI future at India's global summit in New Delhi

Al Jazeera

The fourth, and most high-profile, day of a global artificial intelligence summit in India is under way, with world leaders such as United Nations chief Antonio Guterres and French President Emmanuel Macron taking the floor to discuss how to handle the fast-advancing technology that is prompting investment enthusiasm and deep concern in equal measure. The huge gathering in New Delhi is the fourth in a series of international AI meetings that have been taking place since 2023 in France, South Korea and the United Kingdom. Job disruption, child safety and regulations have topped the agenda of this year's edition. The UN chief called on tech tycoons to support a $3bn global fund to ensure open access to the fast-advancing technology for all. The French president also spoke of the need for deep involvement: "The message I have come to convey is that we are determined to continue to shape the rules of the game, and to do so with our allies such as India," Macron said. "Europe is not blindly focused on regulation - Europe is a space for innovation and investment, but it is a safe space."


A New Movie About George Orwell and 1984 Has a Unique Way of Telling Its Story. It May Haunt You.

Slate

Movies: Why an Oscar-Nominated Filmmaker Used A.I. to Make His New Documentary.


Making AI Inevitable: Historical Perspective and the Problems of Predicting Long-Term Technological Change

Fisher, Mark, Severini, John

arXiv.org Artificial Intelligence

This study demonstrates the extent to which prominent debates about the future of AI are best understood as subjective, philosophical disagreements over the history and future of technological change rather than as objective, material disagreements over the technologies themselves. It focuses on the deep disagreements over whether artificial general intelligence (AGI) will prove transformative for human society; a question that is analytically prior to that of whether this transformative effect will help or harm humanity. The study begins by distinguishing two fundamental camps in this debate. The first of these can be identified as "transformationalists," who argue that continued AI development will inevitably have a profound effect on society. Opposed to them are "skeptics," a more eclectic group united by their disbelief that AI can or will live up to such high expectations. Each camp admits further "strong" and "weak" variants depending on their tolerance for epistemic risk. These stylized contrasts help to identify a set of fundamental questions that shape the camps' respective interpretations of the future of AI. The study focuses on three questions in particular: the possibility of non-biological intelligence, the appropriate time frame of technological predictions, and the assumed trajectory of technological development. In highlighting these specific points of non-technical disagreement, this study demonstrates the wide range of different arguments used to justify either the transformationalist or skeptical position. At the same time, it highlights the strong argumentative burden of the transformationalist position, the way that belief in this position creates competitive pressures to achieve first-mover advantage, and the need to widen the concept of "expertise" in debates surrounding the future development of AI.


If You Think Men Are in Crisis Now … Just Wait

Slate

Sign up for the Slatest to get the most insightful analysis, criticism, and advice out there, delivered to your inbox daily. Men, you may have heard, are in crisis. The causes are many, and the left and right identify different villains (toxic masculinity, according to the former; feminism and gynecocracy, says the latter). But there seems to be a growing consensus that something is rotten in man-land. And across the political spectrum there is at least agreement that male despair and disconnection are fueled in large part by dramatic changes in society, the economy, and the family--all of which have left many men feeling dangerously unmoored, isolated, and purposeless.


The supercomputer set to supercharge America's AI future

FOX News

A major breakthrough in artificial intelligence and high-performance computing is on the way, and it's coming from Georgia Tech. Backed by a $20 million investment from the National Science Foundation (NSF), the university is building a supercomputer named Nexus. It's expected to go online in spring 2026.


Pink Floppy Disc and The Bitles: Embracing the future of AI music

New Scientist

Feedback is New Scientist's popular sideways look at the latest science and technology news. You can submit items you believe may amuse readers to Feedback by emailing feedback@newscientist.com. Feedback has been dimly aware for a while that there is a slew of AI-generated music swamping platforms like Spotify. Our awareness was limited, we confess, because we are so old that we still prefer to listen to CDs. Still, we weren't too surprised when New Scientist's Timothy Revell told us about an indie rock band called The Velvet Sundown that appears to be entirely AI-generated, from their songs, which sound like the beige love-children of Coldplay and the Eagles, to their uncanny-valley Instagram photos, which look like rejected concept art from Daisy Jones & the Six.


Data-Driven Self-Supervised Learning for the Discovery of Solution Singularity for Partial Differential Equations

Cai, Difeng, Sepúlveda, Paulina

arXiv.org Machine Learning

The appearance of singularities in the function of interest constitutes a fundamental challenge in scientific computing. It can significantly undermine the effectiveness of numerical schemes for function approximation, numerical integration, and the solution of partial differential equations (PDEs). The problem becomes more difficult if the location of the singularity is unknown, as is often the case when solving PDEs. Detecting the singularity is therefore critical for developing efficient adaptive methods to reduce computational costs in various applications. In this paper, we consider singularity detection in a purely data-driven setting. Namely, the input only contains given data, such as the vertex set from a mesh. To overcome the limitation of the raw unlabeled data, we propose a self-supervised learning (SSL) framework for estimating the location of the singularity. A key component is a filtering procedure as the pretext task in SSL, where two filtering methods are presented, based on $k$ nearest neighbors and kernel density estimation, respectively. We provide numerical examples to illustrate the potentially pathological or inaccurate results that arise from using raw data without filtering. Various experiments are presented to demonstrate the ability of the proposed approach to deal with input perturbation, label corruption, and different kinds of singularities, such as an interior circle, a boundary layer, and concentric semicircles.
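The abstract's core idea — that the local density of mesh vertices carries information about where a solver has refined around a singularity — can be illustrated with a minimal sketch. The code below is not the paper's method; it is a hypothetical $k$-nearest-neighbor density filter of the general kind the abstract mentions, applied to a synthetic "adaptively refined" vertex set. The function names, the cluster location (0.3, 0.7), and the top-fraction threshold are all illustrative assumptions.

```python
import numpy as np

def knn_density(points, k=10):
    """Inverse distance to the k-th nearest neighbor as a crude density proxy."""
    # Full pairwise distance matrix (fine for a few hundred points).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    rk = d[:, k]  # column 0 is the self-distance (zero)
    return 1.0 / (rk + 1e-12)

def estimate_singularity(points, k=10, top_frac=0.05):
    """Filter to the densest fraction of vertices and average their positions."""
    rho = knn_density(points, k)
    m = max(1, int(top_frac * len(points)))
    densest = np.argsort(rho)[-m:]
    return points[densest].mean(axis=0)

rng = np.random.default_rng(0)
# Synthetic mesh vertices: uniform background plus refinement near (0.3, 0.7),
# mimicking an adaptive solver clustering points around an unknown singularity.
background = rng.uniform(0.0, 1.0, size=(400, 2))
refined = rng.normal([0.3, 0.7], 0.02, size=(200, 2))
verts = np.vstack([background, refined])

est = estimate_singularity(verts)
```

The filtering step matters: averaging the raw vertex set would be pulled toward the domain center by the uniform background, which is the kind of "pathological result due to the use of raw data without filtering" the abstract warns about.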


Futurist who predicted the iPhone reveals date humans will cheat death

Daily Mail - Science & tech

A leading futurist who accurately predicted the rise of the iPhone has now set the date for humanity's most phenomenal breakthrough yet: the ability to cheat death. Ray Kurzweil, a former Google engineering director, has long been known for his bold predictions about the future of technology and humanity. His forecasts often focus on the convergence of biotech, AI, and nanotechnology to radically extend human capabilities. Now, Kurzweil claims humanity is just four years away from its most transformative leap yet, achieving 'longevity escape velocity' by 2029. While some experts remain skeptical, Kurzweil's influence in Silicon Valley ensures his predictions continue to shape the broader conversation around life extension and the future of human health.


Tech billionaires are making a risky bet with humanity's future

MIT Technology Review

While there's a sprawling patchwork of ideas and philosophies powering these visions, three features play a central role, says Adam Becker, a science writer and astrophysicist: an unshakable certainty that technology can solve any problem, a belief in the necessity of perpetual growth, and a quasi-religious obsession with transcending our physical and biological limits. In his timely new book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity, Becker calls this triumvirate of beliefs the "ideology of technological salvation" and warns that tech titans are using it to steer humanity in a dangerous direction. "In most of these isms you'll find the idea of escape and transcendence, as well as the promise of an amazing future, full of unimaginable wonders--so long as we don't get in the way of technological progress." "The credence that tech billionaires give to these specific science-fictional futures validates their pursuit of more--to portray the growth of their businesses as a moral imperative, to reduce the complex problems of the world to simple questions of technology, [and] to justify nearly any action they might want to take," he writes. Becker argues that the only way to break free of these visions is to see them for what they are: a convenient excuse to continue destroying the environment, skirt regulations, amass more power and control, and dismiss the very real problems of today to focus on the imagined ones of tomorrow.